26 research outputs found

    Single-Scale Fusion: An Effective Approach to Merging Images

    Due to its robustness and effectiveness, multi-scale fusion (MSF) based on the Laplacian pyramid decomposition has emerged as a popular technique with utility in many applications. Guided by several intuitive measures (weight maps), the MSF process is versatile and straightforward to implement. However, the number of pyramid levels grows with the image size, which implies sophisticated data management and memory accesses, as well as additional computations. Here, we introduce a simplified formulation that reduces MSF to a single-level process. Starting from the MSF decomposition, we explain both mathematically and intuitively (visually) how to simplify the classical MSF approach with minimal loss of information. The resulting single-scale fusion (SSF) solution is a close approximation of the MSF process that eliminates important redundant computations. It also provides insight into why MSF is so effective. While our simplified expression is derived in the context of high dynamic range imaging, we demonstrate its generality on several well-known fusion-based applications, such as image compositing, extended depth of field, medical imaging, and blending thermal (infrared) images with visible light. Besides visual validation, quantitative evaluations demonstrate that our SSF strategy yields results that are highly competitive with traditional MSF approaches.
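    As a toy illustration of the baseline the abstract describes, the sketch below fuses images with a Laplacian stack (repeated blurring without subsampling); `box_blur` is a hypothetical stand-in for the Gaussian filter of a real pyramid, not the authors' implementation.

```python
import numpy as np

def box_blur(img, k=3):
    """Toy k x k box filter with edge padding (stands in for the
    Gaussian low-pass used when building a pyramid)."""
    pad = k // 2
    p = np.pad(img, pad, mode="edge")
    acc = np.zeros(img.shape, dtype=float)
    for dy in range(k):
        for dx in range(k):
            acc += p[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return acc / (k * k)

def fuse_msf(images, weights, levels=3):
    """Multi-scale fusion over a Laplacian stack: each band-pass layer
    of every input is blended with a correspondingly smoothed weight
    map, and the low-pass residuals are blended last."""
    total = sum(weights)
    norm = [w / (total + 1e-9) for w in weights]  # normalize weight maps
    fused = np.zeros(images[0].shape, dtype=float)
    for img, w in zip(images, norm):
        cur, wcur = img.astype(float), w.astype(float)
        for _ in range(levels):
            low = box_blur(cur)
            fused += wcur * (cur - low)   # weighted detail (Laplacian) layer
            cur, wcur = low, box_blur(wcur)
        fused += wcur * cur               # weighted low-pass residual
    return fused
```

    When one weight map is uniformly 1 and the rest are 0, the stack telescopes and the winning image is reconstructed exactly; the SSF result is that this per-level loop can be collapsed into a single filtering level.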

    A scale invariant detector based on local energy model for matching images

    Finding corresponding feature points has been a challenge for many decades and has attracted considerable attention in computer vision. In this paper we introduce a new method for matching images. Our detection algorithm is based on the local energy model, a concept that emulates the human visual system. For true scale invariance we extend this detector using the automatic scale selection principle. Thus, at every scale level we identify points where the Fourier components of the image are maximally in phase, and then extract only the feature points that maximize a normalized-derivatives function through scale space. To find corresponding points, a new method based on the Normalized Sum of Squared Differences (NSSD) is introduced. NSSD is a classical matching measure, but it has traditionally been limited to the small-baseline case. Our descriptor is adapted to the characteristic scale and is also rotation invariant. Finally, experimental results demonstrate that our algorithm is reliable under significant changes of scale, rotation, and image illumination.
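    The NSSD measure mentioned above can be sketched as the SSD between standardized patches, which is what makes it tolerant to affine illumination changes; the helpers below are an illustrative assumption, not the paper's exact descriptor.

```python
import numpy as np

def nssd(p, q, eps=1e-9):
    """Normalized Sum of Squared Differences: each patch is
    standardized (zero mean, unit variance) before the squared
    differences are averaged, so gain/offset changes cancel out."""
    a = (p - p.mean()) / (p.std() + eps)
    b = (q - q.mean()) / (q.std() + eps)
    return float(np.mean((a - b) ** 2))

def best_match(patch, candidates):
    """Pick the candidate patch minimizing NSSD (lower = more similar)."""
    scores = [nssd(patch, c) for c in candidates]
    return int(np.argmin(scores))
```

    Because standardization cancels gain and offset, a patch and its brightened copy (e.g. `2 * p + 3`) score essentially zero, which is the illumination robustness the abstract claims.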

    Color Channel Transfer for Image Dehazing

    In this letter we introduce a simple but effective concept, Color Channel Transfer (CCT), that is able to substantially improve the performance of various dehazing techniques. CCT is motivated by a key observation: in scattering media, the information from at least one color channel suffers high attenuation. To compensate for this loss of information, CCT employs a color-transfer strategy and operates in a color opponent space, which helps to compensate automatically for the chromatic loss. The reference is computed by combining the details and saliency of the initial image with a uniform gray image that ensures a balanced chromatic distribution. Extensive qualitative and quantitative experiments demonstrate the utility of CCT as a pre-processing step for various dehazing problems such as day-time dehazing, night-time dehazing, and underwater image dehazing.
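    The transfer step itself can be sketched as a Reinhard-style statistics match on one channel: shift and scale the attenuated channel so its mean and spread follow the reference. How the paper builds the reference (from details, saliency, and a uniform gray image) is not reproduced here; the function below is a minimal illustrative stand-in.

```python
import numpy as np

def channel_transfer(src, ref, eps=1e-9):
    """Match the mean and standard deviation of the attenuated channel
    `src` to those of the reference channel `ref` (a minimal stand-in
    for the color-transfer step of CCT)."""
    return (src - src.mean()) * (ref.std() / (src.std() + eps)) + ref.mean()
```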

    Color Channel Compensation (3C): A Fundamental Pre-Processing Step for Image Enhancement

    This article introduces a novel solution to improve image enhancement in terms of color appearance. Our approach, called Color Channel Compensation (3C), overcomes artifacts resulting from the severely non-uniform color spectrum distribution encountered in images captured under hazy night-time conditions, underwater, or under non-uniform artificial illumination. Our solution is founded on the observation that, under such adverse conditions, the information contained in at least one color channel is almost completely lost, making traditional enhancement techniques prone to noise and color shifting. In those cases, our pre-processing method reconstructs the lost channel based on the opponent color channel. Our algorithm subtracts a local mean from each opponent color pixel, thereby partly recovering the lost color from the two colors (red-green or blue-yellow) involved in the opponent color channel. The proposed approach, whilst simple, is shown to consistently improve the outcome of conventional restoration methods. To prove the utility of our 3C operator, we provide an extensive qualitative and quantitative evaluation for white balancing, image dehazing, and underwater enhancement applications.
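    As a hedged sketch of the compensation idea (the actual 3C operator works on opponent channels with a local mean; the simpler mean-gap formula below only illustrates borrowing information from a healthier channel, and `alpha` is a hypothetical strength parameter):

```python
import numpy as np

def compensate_channel(attenuated, healthy, alpha=1.0):
    """Rebuild part of a lost color channel from a better-preserved one:
    the gap between the channel means scales how much of the healthy
    channel's signal is transferred onto the attenuated channel."""
    gap = healthy.mean() - attenuated.mean()
    return attenuated + alpha * gap * healthy
```

    With, say, a red channel attenuated underwater and a strong green channel, the recovered red regains a plausible mean while inheriting local structure from the green channel.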